Large language models (LLMs) have been shown to be able to perform new tasks based on a few demonstrations or natural language instructions. While these capabilities have led to widespread adoption, most LLMs are developed by resource-rich organizations and are frequently kept from the public. As a step towards democratizing this powerful technology, we present BLOOM, a 176B-parameter open-access language model designed and built thanks to a collaboration of hundreds of researchers. BLOOM is a decoder-only Transformer language model that was trained on the ROOTS corpus, a dataset comprising hundreds of sources in 46 natural and 13 programming languages (59 in total). We find that BLOOM achieves competitive performance on a wide variety of benchmarks, with stronger results after undergoing multitask prompted finetuning. To facilitate future research and applications using LLMs, we publicly release our models and code under the Responsible AI License.
Modeling what makes an advertisement persuasive, i.e., what elicits the desired response from consumers, is critical to the study of propaganda, social psychology, and marketing. Despite its importance, computational modeling of persuasion in computer vision is still in its infancy, primarily due to the lack of benchmark datasets that provide persuasion-strategy labels for ads. Motivated by the persuasion literature in social psychology and marketing, we introduce an extensive vocabulary of persuasion strategies and build the first ad image corpus annotated with them. We then formulate persuasion strategy prediction as a multi-modal learning task, for which we design a multi-task attention fusion model that leverages other ad-understanding tasks to predict persuasion strategies. Additionally, we conduct a real-world case study on 1600 ad campaigns of 30 Fortune 500 companies, using our model's predictions to analyze which strategies work with different demographics (age and gender). The dataset also provides image segmentation masks that label the persuasion strategies in the corresponding ad images on the test split. We publicly release our code and dataset at https://midas-research.github.io/persuasion-avertisements/.
With data-centric decision making shaping the future, seamless access to databases is of paramount importance, and there has been extensive research on building effective text-to-SQL (Text2SQL) models for accessing data in databases. Natural language is one of the best interfaces for bridging the gap between data and results, especially for non-technical users: it opens the door to, and generates great interest among, users who lack technical skills or are less proficient in query languages. Yet even though many deep-learning-based algorithms have been proposed and studied, using natural language to solve data-query problems in real-world scenarios remains very challenging. One reason is that different studies use different datasets, each with its own limitations and assumptions. At the same time, we lack a thorough understanding of these proposed models and of their limitations with respect to the specific datasets on which they were trained. In this paper, we present a holistic overview of 24 neural network models studied in recent years, covering architectures based on convolutional neural networks, recurrent neural networks, pointer networks, reinforcement learning, generative models, and more. We also survey 11 datasets that are widely used to train models for Text2SQL, and discuss future application possibilities of Text2SQL technologies for seamless data querying.
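To make the Text2SQL task concrete — mapping a natural-language question to an executable SQL query — here is a minimal sketch with a toy schema and a hand-written question/SQL pair standing in for a model's prediction (none of the surveyed models are invoked):

```python
import sqlite3

# Toy in-memory database standing in for the kind of schema Text2SQL models query.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (name TEXT, dept TEXT, salary INTEGER)")
conn.executemany(
    "INSERT INTO employees VALUES (?, ?, ?)",
    [("Ana", "Sales", 50000), ("Bo", "Engineering", 70000), ("Cy", "Sales", 60000)],
)

# A Text2SQL model would map the question below to the SQL; here the pair is hand-written.
question = "What is the average salary in the Sales department?"
predicted_sql = "SELECT AVG(salary) FROM employees WHERE dept = 'Sales'"

(avg_salary,) = conn.execute(predicted_sql).fetchone()
print(avg_salary)  # 55000.0
```

The challenge the surveyed models address is producing `predicted_sql` automatically, for arbitrary schemas and question phrasings.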
We explore techniques for eye gaze estimation using machine learning. Gaze estimation is a common problem in behavior analysis and human-computer interfaces. The aim of this work is to discuss the various model types for eye gaze estimation and to present results for predicting gaze direction from eye landmarks in unconstrained settings. In unconstrained, real-world settings, feature-based and model-based methods are outperformed by recent appearance-based methods, owing to factors such as illumination changes and other visual artifacts. We discuss a learning-based method for landmark localization trained exclusively on synthetic data, show how the detected landmarks can be used as input both to iterative model fitting and to lightweight learning-based gaze estimation methods, and describe how the models can be used for person-independent and personalized gaze estimation.
In weakly supervised learning (WSL), a model is trained on noisy labels obtained from semantic rules and task-specific pre-trained models. Rules generalize poorly across tasks and require substantial manual effort, while pre-trained models are available only for a limited set of tasks. In this work, we propose to use prompt-based methods as weak sources for obtaining noisy labels on unannotated data. We show that task-agnostic prompts are generalizable and can be used to obtain noisy labels for different spoken language understanding (SLU) tasks such as sentiment classification, disfluency detection, and emotion classification. These prompts can also be extended with task-specific context, providing flexibility in designing task-specific prompts. We demonstrate that prompt-based methods generate reliable labels for the above SLU tasks and can therefore serve as a universal weak source for training a weakly supervised model (WSM) in the absence of labeled data. Our proposed WSL pipeline, trained on prompt-based weak sources, outperforms other competitive low-resource baselines on macro-F1 in both zero-shot and few-shot learning across all three benchmark SLU datasets. The proposed method also outperforms a conventional rule-based WSL pipeline by more than 5% on macro-F1.
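The core of the pipeline — instantiating a task-agnostic prompt template and aggregating the resulting noisy votes into a weak label — can be sketched as follows. The `fake_llm` function is a hypothetical stand-in for querying a real language model, and majority voting is one simple aggregation choice, not necessarily the paper's:

```python
from collections import Counter

# Hypothetical stand-in for an LLM call; a real pipeline would query a model here.
def fake_llm(prompt: str) -> str:
    return "positive" if ("love" in prompt or "great" in prompt) else "negative"

# Task-agnostic template; task-specific context could be appended to it.
TEMPLATE = "Text: {text}\nQuestion: Is the sentiment positive or negative?\nAnswer:"

def weak_label(text: str, n_prompts: int = 3) -> str:
    # Each prompt query acts as one weak source; majority vote aggregates them
    # into a single noisy label for training the weakly supervised model.
    votes = [fake_llm(TEMPLATE.format(text=text)) for _ in range(n_prompts)]
    return Counter(votes).most_common(1)[0][0]

print(weak_label("I love this phone"))   # positive
print(weak_label("The battery is bad"))  # negative
```

The noisy labels produced this way would then train the downstream WSM in place of gold annotations.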
One of the core components of goal-oriented dialog systems is intent detection. Few-shot learning for intent detection is challenging due to the scarcity of available annotated utterances. Although recent works have proposed metric-based and optimization-based methods, the task remains difficult in large label spaces with much smaller numbers of shots. Generalized few-shot learning is harder still, because both novel and seen classes are present at test time. In this work, we propose a simple and effective method based on natural language inference (NLI) that not only tackles few-shot intent detection but also proves useful in zero-shot and generalized few-shot learning settings. Extensive experiments on a number of natural language understanding (NLU) and spoken language understanding (SLU) datasets demonstrate the effectiveness of our method. Furthermore, we highlight settings in which our NLI-based method outperforms the baselines by huge margins.
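The general recipe for casting intent detection as NLI — treating the utterance as a premise and each intent label, verbalized into a sentence, as a hypothesis, then picking the most entailed hypothesis — can be sketched as below. The word-overlap scorer is a toy stand-in for a real NLI model, and the hypothesis template is an illustrative assumption:

```python
# Toy entailment scorer standing in for a real NLI model (e.g., a finetuned
# sentence-pair classifier); it scores hypothesis support by word overlap.
def entailment_score(premise: str, hypothesis: str) -> float:
    p, h = set(premise.lower().split()), set(hypothesis.lower().split())
    return len(p & h) / len(h)

INTENTS = ["book a flight", "play music", "check the weather"]

def detect_intent(utterance: str) -> str:
    # Each intent label becomes an NLI hypothesis; the utterance is the premise.
    hypotheses = {i: f"the user wants to {i}" for i in INTENTS}
    return max(hypotheses, key=lambda i: entailment_score(utterance, hypotheses[i]))

print(detect_intent("can you play some music please"))  # play music
```

Because the label set enters only through the verbalized hypotheses, the same scorer extends naturally to unseen intents, which is what makes the NLI framing attractive for zero-shot and generalized few-shot settings.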
Almost all state-of-the-art neural networks for computer vision tasks are trained by (1) pre-training on a large-scale dataset and (2) finetuning on the target dataset. This strategy helps reduce dependence on the target dataset and improves convergence rate and generalization on the target task. Although pre-training on large-scale datasets is very useful, its foremost disadvantage is high training cost. To address this, we propose efficient filtering methods to select relevant subsets from the pre-training dataset. In addition, we find that lowering the image resolution during pre-training offers an excellent trade-off between cost and performance. We validate our techniques by pre-training on ImageNet in both unsupervised and supervised settings and finetuning on a diverse collection of target datasets and tasks. Our proposed methods drastically reduce pre-training cost while providing strong performance boosts. Finally, we improve on standard ImageNet pre-training by 1-3% by tuning available models on our subsets and by pre-training on a dataset filtered from a larger-scale dataset.
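One simple realization of relevance filtering — keeping the pre-training examples whose features lie closest to the target data's mean feature vector — is sketched below with toy 2-D features; the specific scoring rule and cut-off are illustrative assumptions, not the paper's exact method:

```python
import math

# Toy "feature vectors" for pre-training examples and for the target dataset.
pretrain = {"cat": (1.0, 1.0), "dog": (1.2, 0.8), "car": (9.0, 9.0), "plane": (8.5, 9.5)}
target_feats = [(1.1, 0.9), (0.9, 1.1)]  # target set is animal-like

# Centroid of the target features; relevance = closeness to this centroid.
centroid = tuple(sum(c) / len(target_feats) for c in zip(*target_feats))

# Rank pre-training examples by distance to the target centroid; keep the top half.
ranked = sorted(pretrain, key=lambda k: math.dist(pretrain[k], centroid))
subset = ranked[: len(ranked) // 2]
print(subset)  # ['cat', 'dog']
```

Pre-training only on such a filtered subset (optionally at reduced image resolution) is what yields the cost/performance trade-off the abstract describes.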
Quadruped robots are currently used in industrial robotics as mechanical aids to automate several routine tasks. However, the use of such robots in a domestic setting is still very much an open research area. This paper discusses the design and virtual simulation of such a robot, capable of detecting and understanding human emotions, generating its gait, and responding via sounds and on-screen expressions. To this end, we use a combination of reinforcement learning and software engineering concepts to simulate a quadruped robot that can understand emotions, navigate various terrains, detect sound sources, and respond to emotions using audio-visual feedback. This paper aims to establish a framework for simulating a quadruped robot that is emotionally intelligent and can primarily respond to audio-visual stimuli with motor or audio responses. Emotion detection from speech was not as performant as ERANNs or Zeta Policy learning, but still managed an accuracy of 63.5%. The video emotion detection system produced results nearly on par with the state of the art, with an accuracy of 99.66%. Due to its "on-policy" learning process, the PPO algorithm learned extremely quickly, allowing the simulated dog to demonstrate a remarkably seamless gait across different cadences and variations. This enabled the quadruped robot to respond to generated stimuli, allowing us to conclude that it functions as predicted and satisfies the aim of this work.
Searching long egocentric videos with natural language queries (NLQ) has compelling applications in augmented reality and robotics, where a fluid index into everything that a person (agent) has seen before could augment human memory and surface relevant information on demand. However, the structured nature of the learning problem (free-form text query inputs, localized video temporal window outputs) and its needle-in-a-haystack nature makes it both technically challenging and expensive to supervise. We introduce Narrations-as-Queries (NaQ), a data augmentation strategy that transforms standard video-text narrations into training data for a video query localization model. Validating our idea on the Ego4D benchmark, we find it has tremendous impact in practice. NaQ improves multiple top models by substantial margins (even doubling their accuracy), and yields the very best results to date on the Ego4D NLQ challenge, soundly outperforming all challenge winners in the CVPR and ECCV 2022 competitions and topping the current public leaderboard. Beyond achieving the state-of-the-art for NLQ, we also demonstrate unique properties of our approach such as gains on long-tail object queries, and the ability to perform zero-shot and few-shot NLQ.
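The core data transformation in NaQ — repurposing each timestamped video-text narration as a (query, temporal window) training pair for an NLQ localization model — can be sketched as follows; the fixed window half-width is an illustrative assumption, not the paper's exact recipe:

```python
# Timestamped narrations of the kind found in egocentric video datasets
# ("C" is the camera wearer), given as (time_in_seconds, text) pairs.
narrations = [
    (12.0, "C picks up the kettle"),
    (47.5, "C opens the fridge"),
]

def narrations_to_queries(narrs, half_width=2.0):
    # Each narration becomes a free-form query, and a temporal window around
    # its timestamp becomes the localization target.
    examples = []
    for t, text in narrs:
        window = (max(0.0, t - half_width), t + half_width)
        examples.append({"query": text, "window": window})
    return examples

for ex in narrations_to_queries(narrations):
    print(ex["query"], ex["window"])
```

Because narrations are plentiful compared to hand-annotated NLQ examples, this conversion massively expands the supervision available to the localization model.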
A Machine Translation (MT) system generally aims at the automatic rendering of a source language into a target language, retaining the original context, using various Natural Language Processing (NLP) techniques. Among these methods is Statistical Machine Translation (SMT), which uses probabilistic and statistical techniques to analyze and convert text. This paper canvasses the development of bilingual SMT models for translating English to fifteen low-resource Indian Languages (ILs) and vice versa. At the outset, all 15 languages are briefed with a short description related to our experimental needs. Further, a detailed analysis of the Samanantar and OPUS datasets for model building, along with the standard benchmark dataset (Flores-200) for fine-tuning and testing, is carried out as part of our experiment. Different preprocessing approaches are proposed in this paper to handle noise in the datasets. To build the system, the MOSES open-source SMT toolkit is used. Distance-based reordering is employed, with the aim of capturing grammatical rules and context-dependent adjustments through a phrase reordering categorization framework. In our experiments, translation quality is evaluated using standard metrics such as BLEU, METEOR, and RIBES.
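As a concrete illustration of how such metrics score a candidate translation against a reference, here is a simplified BLEU-1 (clipped unigram precision with a brevity penalty); real evaluations use full multi-n-gram BLEU implementations such as sacreBLEU, so this is only a sketch of the idea:

```python
import math
from collections import Counter

def bleu1(candidate: str, reference: str) -> float:
    """Clipped unigram precision times BLEU's brevity penalty (BLEU-1)."""
    cand, ref = candidate.split(), reference.split()
    # Clipped matches: each candidate word counts at most as often as it
    # appears in the reference.
    overlap = sum((Counter(cand) & Counter(ref)).values())
    precision = overlap / len(cand)
    # Brevity penalty discourages translations shorter than the reference.
    bp = 1.0 if len(cand) >= len(ref) else math.exp(1 - len(ref) / len(cand))
    return bp * precision

score = bleu1("the cat sat on mat", "the cat sat on the mat")
print(round(score, 3))  # 0.819
```

Here every candidate word matches the reference (precision 1.0), but the missing word triggers the brevity penalty, pulling the score below 1.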